Image captioning models require the high-level generalization ability to describe the contents of various images in words. Most existing approaches treat image-caption pairs equally during training, without considering differences in their learning difficulty. Several image captioning approaches introduce curriculum learning methods that present training data with increasing levels of difficulty, but their difficulty measurements are based either on domain-specific features or on prior model training. In this paper, we propose a simple yet efficient difficulty measurement for image captioning using the cross-modal similarity calculated by a pretrained vision-language model. Experiments on the COCO and Flickr30k datasets show that our proposed approach achieves performance superior to the baselines and competitive convergence speed, without requiring heuristics or incurring additional training costs. Moreover, the model's higher performance on difficult examples and unseen data further demonstrates its generalization ability.
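As a hedged illustration of this difficulty measure, the sketch below scores each image-caption pair by CLIP cosine similarity and orders the training data from easiest to hardest; the paper does not prescribe this exact model or API, so the use of HuggingFace's openai/clip-vit-base-patch32 checkpoint is an assumption.

```python
# A minimal sketch of cross-modal difficulty scoring, assuming CLIP
# (via HuggingFace transformers) as the pretrained vision-language model.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def difficulty(image_path: str, caption: str) -> float:
    """Lower cross-modal similarity -> harder image-caption pair."""
    inputs = processor(text=[caption], images=Image.open(image_path),
                       return_tensors="pt", padding=True)
    img = model.get_image_features(pixel_values=inputs["pixel_values"])
    txt = model.get_text_features(input_ids=inputs["input_ids"],
                                  attention_mask=inputs["attention_mask"])
    sim = torch.cosine_similarity(img, txt).item()
    return 1.0 - sim  # map similarity to a difficulty score

# Curriculum: present pairs in order of increasing difficulty.
# pairs = [("img1.jpg", "a dog on grass"), ...]   # hypothetical data
# ordered = sorted(pairs, key=lambda p: difficulty(*p))
```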
Meta-reinforcement learning (meta-RL) is a promising approach for enabling agents to learn new tasks quickly. However, because the task information provided by rewards alone is insufficient, most meta-RL algorithms show poor generalization in multi-task scenarios. Language-conditioned meta-RL improves generalization by matching language instructions with the agent's behaviors. Since learning from symmetry is an important form of human learning, combining symmetry and language instructions in meta-RL can help improve both the generalization and the learning efficiency of the algorithm. We therefore propose a dual-MDP meta-reinforcement learning method that can efficiently learn new tasks from symmetric data and language instructions. We evaluate our method on multiple challenging manipulation tasks, and the experimental results show that it can significantly improve the generalization and efficiency of meta-reinforcement learning.
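The abstract does not detail the dual-MDP architecture; as a loose, hypothetical sketch of the common backbone such methods build on, the toy module below conditions a policy on an encoded language instruction. All module names and sizes are illustrative, not the paper's design.

```python
# A toy language-conditioned policy: encode the instruction with a GRU and
# concatenate the summary with the state before predicting actions.
import torch
import torch.nn as nn

class LanguageConditionedPolicy(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=64, state_dim=32, action_dim=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.instr_encoder = nn.GRU(embed_dim, embed_dim, batch_first=True)
        self.policy = nn.Sequential(
            nn.Linear(state_dim + embed_dim, 128), nn.ReLU(),
            nn.Linear(128, action_dim),
        )

    def forward(self, state, instruction_tokens):
        _, h = self.instr_encoder(self.embed(instruction_tokens))  # h: (1, B, E)
        return self.policy(torch.cat([state, h.squeeze(0)], dim=-1))

# policy = LanguageConditionedPolicy()
# action_logits = policy(torch.randn(8, 32), torch.randint(0, 1000, (8, 12)))
```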
Many real-world graphs contain temporal information. Temporal graph neural networks (TGNNs) capture this temporal information, together with structural and contextual information, in the dynamic node embeddings they generate, and researchers have shown that these embeddings achieve state-of-the-art performance on many different tasks. In this work, we propose TGL, a unified framework for large-scale offline temporal graph neural network training, in which users can compose various temporal graph neural networks with a simple configuration file. TGL comprises five main components: a temporal sampler, a mailbox, a node memory module, a memory updater, and a message passing engine. We design a temporal-CSR data structure and a parallel sampler to efficiently sample temporal neighbors and form mini-batches. We propose a novel random chunk scheduling technique that mitigates the problem of stale node memory when training with large batch sizes. To address the limitation that current TGNNs are evaluated only on small-scale datasets, we introduce two large real-world datasets with 0.2 billion and 1.3 billion temporal edges. We evaluate TGL on four small-scale datasets with a single GPU, and on the two large datasets with multiple GPUs, for both link prediction and node classification tasks. We compare TGL with the open-source code of five methods and show that TGL achieves similar or higher accuracy with an average speedup of 13x. Compared with the baselines, our temporal parallel sampler achieves an average speedup of 173x on a multi-core CPU. On a 4-GPU machine, TGL can train one epoch over more than one billion temporal edges in 1-10 hours. To the best of our knowledge, this is the first work to propose a general framework for large-scale temporal graph neural network training on multiple GPUs.
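As a simplified sketch of the mailbox / node-memory / memory-updater interplay described above (in the TGN style that TGL generalizes), the snippet below buffers per-node messages and applies them with a GRU cell; dimensions and the update rule are illustrative, not TGL's actual implementation.

```python
# Toy node-memory module: a mailbox stashes the latest message per node,
# and a GRU-cell updater folds it into the node's memory state.
import torch
import torch.nn as nn

class NodeMemory(nn.Module):
    def __init__(self, num_nodes: int, dim: int):
        super().__init__()
        self.register_buffer("memory", torch.zeros(num_nodes, dim))
        self.register_buffer("mailbox", torch.zeros(num_nodes, dim))
        self.updater = nn.GRUCell(dim, dim)  # the memory updater

    def deliver(self, nodes: torch.Tensor, messages: torch.Tensor):
        """Stash the latest message for each node (the mailbox)."""
        self.mailbox[nodes] = messages.detach()

    def update(self, nodes: torch.Tensor) -> torch.Tensor:
        """Apply buffered messages to node memory before message passing."""
        new_mem = self.updater(self.mailbox[nodes], self.memory[nodes])
        self.memory[nodes] = new_mem.detach()  # keep the buffer out of autograd
        return new_mem

# mem = NodeMemory(num_nodes=100, dim=16)
# mem.deliver(torch.tensor([3, 7]), torch.randn(2, 16))
# h = mem.update(torch.tensor([3, 7]))
```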
Graph Convolutional Networks (GCNs) are powerful models for learning representations of attributed graphs. To scale GCNs to large graphs, state-of-the-art methods use various layer sampling techniques to alleviate the "neighbor explosion" problem during minibatch training. We propose GraphSAINT, a graph sampling based inductive learning method that improves training efficiency and accuracy in a fundamentally different way. By changing perspective, GraphSAINT constructs minibatches by sampling the training graph, rather than the nodes or edges across GCN layers. In each iteration, a complete GCN is built from the properly sampled subgraph. Thus, we ensure a fixed number of well-connected nodes in all layers. We further propose a normalization technique to eliminate bias, and sampling algorithms for variance reduction. Importantly, we can decouple the sampling from the forward and backward propagation, and extend GraphSAINT with many architecture variants (e.g., graph attention, jumping connection). GraphSAINT demonstrates superior performance in both accuracy and training time on five large graphs, and achieves new state-of-the-art F1 scores for PPI (0.995) and Reddit (0.970).
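A minimal sketch of this minibatch construction is shown below: sample a node budget, take the induced subgraph, and train a complete GCN on it. The uniform node sampler is only the simplest of GraphSAINT's samplers, and the normalization coefficients (estimated from pre-sampled subgraphs in the paper) are omitted here.

```python
# Subgraph-based minibatch construction: the whole minibatch is one sampled
# subgraph, so every GCN layer sees the same node set (no neighbor explosion).
import torch

def sample_subgraph(edge_index: torch.Tensor, num_nodes: int, budget: int):
    """Uniform node sampler: keep `budget` nodes plus their induced edges."""
    keep = torch.randperm(num_nodes)[:budget]
    mask = torch.zeros(num_nodes, dtype=torch.bool)
    mask[keep] = True
    src, dst = edge_index  # edge_index: (2, E) tensor of node ids
    edge_mask = mask[src] & mask[dst]
    # Relabel kept node ids to 0..budget-1 for the subgraph.
    relabel = torch.full((num_nodes,), -1, dtype=torch.long)
    relabel[keep] = torch.arange(budget)
    sub_edges = torch.stack([relabel[src[edge_mask]], relabel[dst[edge_mask]]])
    return keep, sub_edges

# Each iteration: sample a subgraph, then run a complete GCN on it.
# nodes, sub_edges = sample_subgraph(edge_index, num_nodes=10_000, budget=512)
```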
Digitization of scanned receipts aims to extract text from receipt images and save it into structured documents. This is usually split into two sub-tasks: text localization and optical character recognition (OCR). Most existing OCR models focus only on cropped text instance images, which require the bounding box information provided by a text region detection model. Introducing an additional detector to identify the text instance images in advance is inefficient; however, instance-level OCR models have very low accuracy when processing a whole image for document-level OCR, such as a receipt image containing multiple text lines arranged in various layouts. To this end, we propose a localization-free document-level OCR model that transcribes all the characters in a receipt image into an ordered sequence end-to-end. Specifically, we finetune the pretrained Transformer-based instance-level model TrOCR with randomly cropped image chunks, and gradually increase the chunk size to generalize the recognition ability from instance images to full-page images. In our experiments on the SROIE receipt OCR dataset, the model finetuned with our strategy achieved a 64.4 F1-score and a 22.8% character error rate (CER) on the word-level and character-level metrics, respectively, outperforming the baseline results of a 48.5 F1-score and 50.6% CER. The best model, which splits the full image into 15 equally sized chunks, achieves an 87.8 F1-score and a 4.98% CER with minimal additional pre- or post-processing of the output. Moreover, the characters in the generated document-level sequences are arranged in reading order, which is practical for real-world applications.
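As a hedged sketch of the chunk curriculum, the helper below splits a full-page image into equal-height strips; splitting horizontally and the stage schedule (60 → 30 → 15 chunks) are assumptions for illustration, and the TrOCR finetuning loop itself (the standard HuggingFace VisionEncoderDecoderModel recipe) is omitted.

```python
# Split a receipt image into equally sized horizontal chunks, to be enlarged
# stage by stage from near-instance crops toward the full page.
from PIL import Image

def split_into_chunks(image: Image.Image, num_chunks: int):
    """Split a full-page image into `num_chunks` equal-height strips."""
    w, h = image.size
    step = h // num_chunks
    return [image.crop((0, i * step, w,
                        h if i == num_chunks - 1 else (i + 1) * step))
            for i in range(num_chunks)]

# Hypothetical curriculum over chunk counts:
# for num_chunks in (60, 30, 15):
#     for img, text in receipt_pairs:            # hypothetical dataset
#         for chunk in split_into_chunks(img, num_chunks):
#             ...  # finetune TrOCR on (chunk, aligned target text)
```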
Registration of brain MRI images requires solving a deformation field, which is extremely difficult when aligning intricate brain tissues such as subcortical nuclei. Existing efforts decompose the target deformation field into intermediate sub-fields with either tiny motions, i.e., progressive registration stage by stage, or lower resolutions, i.e., coarse-to-fine estimation of the full-size deformation field. In this paper, we argue that these efforts are not mutually exclusive, and propose a unified framework for brain MRI registration in both progressive and coarse-to-fine manners simultaneously. Specifically, building on a dual-encoder U-Net, the fixed-moving MRI pair is encoded and decoded into multi-scale deformation sub-fields from coarse to fine. Each decoding block contains two proposed novel modules: i) Deformation Field Integration (DFI), which computes a single integrated sub-field such that warping by it is equivalent to warping progressively by the sub-fields from all previous decoding blocks; and ii) Non-rigid Feature Fusion (NFF), in which the features of the fixed-moving pair are aligned by the DFI-integrated sub-field and then fused to predict a finer sub-field. Leveraging DFI and NFF, the target deformation field is factorized into multi-scale sub-fields, where the coarser fields ease the estimation of a finer one and the finer fields learn to make up for the misalignments left unsolved by the previous coarser ones. Extensive and comprehensive experimental results on both private and public datasets demonstrate superior registration performance on brain MRI images over progressive-only registration and coarse-to-fine-only estimation, with the average Dice rising by up to 8%.
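To make the DFI idea concrete, here is a 2-D sketch (under standard grid_sample conventions) of composing two displacement sub-fields into one integrated field whose single warp equals warping progressively by each; the paper's learned 3-D version differs in detail.

```python
# Compose displacement fields: warp(img, compose(u1, u2)) equals
# warp(warp(img, u1), u2), since the combined displacement satisfies
# u(x) = u2(x) + u1(x + u2(x)).
import torch
import torch.nn.functional as F

def make_grid(h, w):
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    return torch.stack([xs, ys], dim=-1)  # (H, W, 2), grid_sample order

def warp(img, disp):
    """img: (1,C,H,W); disp: (1,H,W,2) displacement in normalized coords."""
    grid = make_grid(img.shape[2], img.shape[3]).unsqueeze(0) + disp
    return F.grid_sample(img, grid, align_corners=True)

def compose(u1, u2):
    """Integrated field: resample u1 at the locations shifted by u2."""
    u1_maps = u1.permute(0, 3, 1, 2)  # (1,2,H,W) for grid_sample
    u1_at = F.grid_sample(u1_maps,
                          make_grid(*u1.shape[1:3]).unsqueeze(0) + u2,
                          align_corners=True).permute(0, 2, 3, 1)
    return u2 + u1_at
```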
Benefiting from the intrinsic supervision information exploitation capability, contrastive learning has recently achieved promising performance in the field of deep graph clustering. However, we observe that two drawbacks of the positive and negative sample construction mechanisms limit the performance of existing algorithms. 1) The quality of positive samples heavily depends on carefully designed data augmentations, and inappropriate data augmentations easily lead to semantic drift and indiscriminative positive samples. 2) The constructed negative samples are unreliable because they ignore important clustering information. To solve these problems, we propose a Cluster-guided Contrastive deep Graph Clustering network (CCGC) that mines the intrinsic supervision information in high-confidence clustering results. Specifically, instead of conducting complex node or edge perturbation, we construct two views of the graph by designing special Siamese encoders whose weights are not shared between the sibling sub-networks. Then, guided by the high-confidence clustering information, we carefully select and construct the positive samples from the same high-confidence cluster in the two views. Moreover, to construct semantically meaningful negative sample pairs, we regard the centers of different high-confidence clusters as negative samples, thus improving the discriminative capability and reliability of the constructed sample pairs. Lastly, we design an objective function that pulls together samples from the same cluster while pushing away those from other clusters, by maximizing and minimizing the cross-view cosine similarity between positive and negative samples, respectively. Extensive experimental results on six datasets demonstrate the effectiveness of CCGC compared with existing state-of-the-art algorithms.
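A condensed, illustrative sketch of this sample construction and objective is given below: cross-view pairs of the same high-confidence node serve as positives, and the centers of the other high-confidence clusters serve as negatives. The exact loss form is an assumption, not the paper's equation.

```python
# Cluster-guided contrastive loss sketch: maximize cross-view similarity of
# confident nodes, minimize similarity to other clusters' centers.
import torch
import torch.nn.functional as F

def ccgc_loss(z1, z2, labels, conf_mask):
    """z1, z2: (N, D) two-view embeddings; labels: (N,) hard assignments;
    conf_mask: (N,) bool, True for high-confidence nodes (each cluster is
    assumed to contain at least one confident node)."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    k = int(labels.max()) + 1
    # Cluster centers from view 2 act as negative samples.
    centers = torch.stack([z2[(labels == c) & conf_mask].mean(0) for c in range(k)])
    centers = F.normalize(centers, dim=1)
    pos = (z1[conf_mask] * z2[conf_mask]).sum(1)   # cross-view positives
    neg = z1[conf_mask] @ centers.T                # (n, k) center similarities
    # Exclude each node's own cluster center from its negatives.
    own = F.one_hot(labels[conf_mask], k).bool()
    neg = neg.masked_fill(own, float("-inf"))
    return -(pos - torch.logsumexp(neg, dim=1)).mean()
```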
As one of the prevalent methods for building automation systems, Imitation Learning (IL) shows promising performance in a wide range of domains. However, despite the considerable improvement in policy performance, research on the explainability of IL models is still limited. Inspired by recent approaches in explainable artificial intelligence, we propose a model-agnostic explanation framework for IL models called R2RISE. R2RISE aims to explain overall policy performance with respect to the frames in the demonstrations. It iteratively retrains the black-box IL model from randomly masked demonstrations and uses the conventional evaluation outcome, the environment return, as the coefficient to build an importance map. We also conducted experiments to investigate three major questions concerning the equality of frames' importance, the effectiveness of the importance map, and the connections between importance maps from different IL models. The results show that R2RISE successfully distinguishes important frames from the demonstrations.
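Schematically, the R2RISE loop looks like the sketch below, where train_il and evaluate_return are hypothetical stand-ins for the black-box IL pipeline and the environment rollout.

```python
# RISE-style importance estimation over demonstration frames: retrain on
# randomly masked demonstrations and weight each kept frame by the
# resulting environment return.
import numpy as np

def r2rise(demo_frames, train_il, evaluate_return, iters=100, keep_prob=0.5):
    n = len(demo_frames)
    importance = np.zeros(n)
    mass = np.zeros(n)
    for _ in range(iters):
        mask = np.random.rand(n) < keep_prob          # random frame mask
        policy = train_il([f for f, m in zip(demo_frames, mask) if m])
        ret = evaluate_return(policy)                 # environment return
        importance += ret * mask                      # credit kept frames
        mass += mask
    return importance / np.maximum(mass, 1)           # normalized importance map
```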
Text clustering and topic extraction are two important tasks in text mining. Usually, these two tasks are performed separately. For topic extraction to facilitate clustering, we can first project texts into a topic space and then run a clustering algorithm to obtain clusters. For clustering to promote topic extraction, we can first obtain clusters with a clustering algorithm and then extract cluster-specific topics. However, this naive strategy ignores the fact that text clustering and topic extraction are strongly correlated and follow a chicken-and-egg relationship; performing them separately fails to let them benefit each other and achieve the best overall performance. In this paper, we propose an unsupervised text clustering and topic extraction framework (ClusTop) which integrates the two tasks into a unified framework and can simultaneously achieve high-quality clustering results and extract topics from each cluster. Our framework comprises four components: enhanced language model training, dimensionality reduction, clustering, and topic extraction, where the enhanced language model can be viewed as a bridge between clustering and topic extraction. On one hand, it provides text embeddings with a strong cluster structure, which facilitates effective text clustering; on the other hand, thanks to its self-attention architecture, it attends strongly to topic-related words, which benefits topic extraction. Moreover, the training of the enhanced language model is unsupervised. Experiments on two datasets demonstrate the effectiveness of our framework and provide benchmarks for different model combinations within it.
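As a simplified stand-in for this pipeline built from off-the-shelf components (the paper's enhanced language model replaces the generic TF-IDF embedding used here), the sketch below embeds texts, reduces dimensionality, clusters, and reads off per-cluster topic words.

```python
# Embed -> reduce -> cluster -> extract per-cluster topic words, using only
# scikit-learn components as illustrative substitutes for ClusTop's modules.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

def cluster_and_topics(texts, n_clusters=5, n_topic_words=10):
    vec = TfidfVectorizer(stop_words="english")
    X = vec.fit_transform(texts)
    Z = TruncatedSVD(n_components=min(50, X.shape[1] - 1)).fit_transform(X)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(Z)
    vocab = np.array(vec.get_feature_names_out())
    topics = {}
    for c in range(n_clusters):
        # Top TF-IDF terms of each cluster serve as its topic words.
        centroid = np.asarray(X[labels == c].mean(axis=0)).ravel()
        topics[c] = vocab[np.argsort(centroid)[::-1][:n_topic_words]].tolist()
    return labels, topics
```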
An increasing number of public datasets have shown a marked clinical impact on assessing anatomical structures. However, each of these datasets is small, partially labeled, and rarely investigates severe tumor subjects. Moreover, current models are limited to segmenting specific organs/tumors and cannot be extended to novel domains and classes. To tackle these limitations, we introduce embeddings learned from Contrastive Language-Image Pre-training (CLIP) into segmentation models, dubbed the CLIP-Driven Universal Model. The Universal Model can better segment 25 organs and 6 types of tumors by exploiting the semantic relationships between abdominal structures. The model is developed from an assembly of 14 datasets with 3,410 CT scans and evaluated on 6,162 external CT scans from 3 datasets. We rank first on the public leaderboard of the Medical Segmentation Decathlon (MSD) and achieve state-of-the-art results on Beyond The Cranial Vault (BTCV). Compared with dataset-specific models, the Universal Model is computationally more efficient (6x faster), generalizes better to CT scans from varying sites, and shows stronger transfer learning performance on novel tasks. The design of the CLIP embedding enables the Universal Model to be easily extended to new classes without catastrophically forgetting previously learned classes.
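As a loose, hypothetical sketch of the CLIP-driven conditioning idea, the toy head below maps a class's CLIP text embedding to the parameters of a tiny per-class convolutional head over shared image features; the dimensions and the one-layer head are illustrative, not the published architecture.

```python
# Text-conditioned segmentation head: a controller turns a class's CLIP
# text embedding into 1x1x1 conv parameters applied to shared 3-D features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CLIPDrivenHead(nn.Module):
    def __init__(self, text_dim=512, feat_dim=32):
        super().__init__()
        self.feat_dim = feat_dim
        # Controller: text embedding -> conv weight (+ bias) for one class.
        self.controller = nn.Linear(text_dim, feat_dim + 1)

    def forward(self, feats, text_emb):
        """feats: (B, C, D, H, W) image features; text_emb: (text_dim,)."""
        params = self.controller(text_emb)
        w = params[: self.feat_dim].view(1, self.feat_dim, 1, 1, 1)
        b = params[self.feat_dim :]
        return torch.sigmoid(F.conv3d(feats, w, bias=b))  # per-voxel class mask

# head = CLIPDrivenHead()
# mask = head(torch.randn(1, 32, 8, 16, 16), torch.randn(512))
```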